Conditional Entropy

In information theory, the conditional entropy quantifies the amount of information needed to describe the outcome of a random variable Y given that the value of another random variable X is known. Here, information is measured in shannons, nats, or hartleys. The ''entropy of Y conditioned on X'' is written as \Eta(Y \mid X).


Definition

The conditional entropy of Y given X is defined as

:\Eta(Y \mid X)\ = -\sum_{x\in\mathcal X, y\in\mathcal Y}p(x,y)\log \frac{p(x,y)}{p(x)},

where \mathcal X and \mathcal Y denote the support sets of X and Y.

''Note:'' Here, the convention is that the expression 0 \log 0 should be treated as being equal to zero. This is because \lim_{\theta\to 0^+} \theta\, \log \theta = 0.

Intuitively, notice that by the definition of expected value and of conditional probability, \Eta(Y \mid X) can be written as \Eta(Y \mid X) = \mathbb{E}[f(X,Y)], where f is defined as f(x,y) := -\log\Big(\frac{p(x,y)}{p(x)}\Big) = -\log(p(y \mid x)). One can think of f as associating each pair (x, y) with a quantity measuring the information content of (Y=y) given (X=x). This quantity is directly related to the amount of information needed to describe the event (Y=y) given (X=x). Hence, by computing the expected value of f over all pairs of values (x, y) \in \mathcal X \times \mathcal Y, the conditional entropy \Eta(Y \mid X) measures how much information, on average, the variable X encodes about Y.
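As a concrete illustration (a minimal sketch, not part of the original article), the following Python snippet evaluates the defining formula on a small, assumed joint distribution p(x,y); the distribution and the function name are hypothetical.

<syntaxhighlight lang="python">
# Minimal sketch: H(Y|X) computed from an assumed joint distribution p(x, y),
# given as a nested dict {x: {y: p(x, y)}}. Logs are base 2, so the result is in bits.
from math import log2

def conditional_entropy(joint):
    """H(Y|X) = -sum_{x,y} p(x,y) * log2( p(x,y) / p(x) )."""
    p_x = {x: sum(row.values()) for x, row in joint.items()}  # marginal p(x)
    h = 0.0
    for x, row in joint.items():
        for y, p in row.items():
            if p > 0:                       # convention: 0 log 0 = 0
                h -= p * log2(p / p_x[x])
    return h

# Hypothetical example: X is a fair bit and Y copies X with probability 0.9.
joint = {0: {0: 0.45, 1: 0.05},
         1: {0: 0.05, 1: 0.45}}
print(conditional_entropy(joint))           # ~0.469 bits
</syntaxhighlight>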


Motivation

Let \Eta(Y \mid X=x) be the entropy of the discrete random variable Y conditioned on the discrete random variable X taking a certain value x. Denote the support sets of X and Y by \mathcal X and \mathcal Y. Let Y have probability mass function p_Y. The unconditional entropy of Y is calculated as \Eta(Y) := \mathbb{E}[\operatorname{I}(Y)], i.e.

:\Eta(Y) = \sum_{y\in\mathcal Y} \Pr(Y=y)\,\operatorname{I}(y) = -\sum_{y\in\mathcal Y} p_Y(y) \log_2 p_Y(y),

where \operatorname{I}(y_i) is the information content of the outcome of Y taking the value y_i. The entropy of Y conditioned on X taking the value x is defined analogously by conditional expectation:

:\Eta(Y \mid X=x) = -\sum_{y\in\mathcal Y} \Pr(Y=y \mid X=x)\,\log_2 \Pr(Y=y \mid X=x).

Note that \Eta(Y \mid X) is the result of averaging \Eta(Y \mid X=x) over all possible values x that X may take. Also, if the above sum is taken over a sample y_1, \dots, y_n, the expected value E_X[\Eta(y_1, \dots, y_n \mid X = x)] is known in some domains as equivocation.

Given discrete random variables X with image \mathcal X and Y with image \mathcal Y, the conditional entropy of Y given X is defined as the weighted sum of \Eta(Y \mid X=x) for each possible value of x, using p(x) as the weights:

:\begin{align}
\Eta(Y \mid X)\ &\equiv \sum_{x\in\mathcal X}\,p(x)\,\Eta(Y \mid X=x)\\
&= -\sum_{x\in\mathcal X} p(x)\sum_{y\in\mathcal Y}\,p(y \mid x)\,\log_2\, p(y \mid x)\\
&= -\sum_{x\in\mathcal X, y\in\mathcal Y}\,p(x)\,p(y \mid x)\,\log_2\,p(y \mid x)\\
&= -\sum_{x\in\mathcal X, y\in\mathcal Y}p(x,y)\log_2 \frac{p(x,y)}{p(x)}.
\end{align}
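The weighted-sum form can be checked numerically. The sketch below (using the same assumed joint distribution as in the earlier snippet) computes each \Eta(Y \mid X=x) separately and then averages them with weights p(x); the result agrees with the joint-sum formula.

<syntaxhighlight lang="python">
# Sketch: H(Y|X) as the p(x)-weighted average of the per-value entropies H(Y|X=x),
# for an assumed joint distribution. Should match the joint-sum computation above.
from math import log2

joint = {0: {0: 0.45, 1: 0.05},   # assumed p(x, y), same as before
         1: {0: 0.05, 1: 0.45}}

h_total = 0.0
for x, row in joint.items():
    p_x = sum(row.values())                        # marginal p(x)
    # specific conditional entropy H(Y|X=x), i.e. the entropy of the row p(y|x)
    h_row = -sum((p / p_x) * log2(p / p_x) for p in row.values() if p > 0)
    h_total += p_x * h_row                         # weight by p(x)

print(h_total)                                     # ~0.469 bits, as before
</syntaxhighlight>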


Properties


Conditional entropy equals zero

\Eta(Y \mid X)=0 if and only if the value of Y is completely determined by the value of X.


Conditional entropy of independent random variables

Conversely, \Eta(Y \mid X) = \Eta(Y) if and only if Y and X are independent random variables.


Chain rule

Assume that the combined system determined by two random variables X and Y has joint entropy \Eta(X,Y), that is, we need \Eta(X,Y) bits of information on average to describe its exact state. Now if we first learn the value of X, we have gained \Eta(X) bits of information. Once X is known, we only need \Eta(X,Y)-\Eta(X) bits to describe the state of the whole system. This quantity is exactly \Eta(Y \mid X), which gives the ''chain rule'' of conditional entropy:

:\Eta(Y \mid X)\, = \, \Eta(X,Y) - \Eta(X).

The chain rule follows from the above definition of conditional entropy:

:\begin{align}
\Eta(Y \mid X) &= \sum_{x\in\mathcal X, y\in\mathcal Y}p(x,y)\log \left(\frac{p(x)}{p(x,y)} \right) \\
&= \sum_{x\in\mathcal X, y\in\mathcal Y}p(x,y)\,(\log (p(x)) - \log (p(x,y))) \\
&= -\sum_{x\in\mathcal X, y\in\mathcal Y}p(x,y)\log (p(x,y)) + \sum_{x\in\mathcal X, y\in\mathcal Y}p(x,y)\log (p(x)) \\
&= \Eta(X,Y) + \sum_{x \in \mathcal X} p(x)\log (p(x)) \\
&= \Eta(X,Y) - \Eta(X).
\end{align}

In general, a chain rule for multiple random variables holds:

:\Eta(X_1,X_2,\ldots,X_n) = \sum_{i=1}^n \Eta(X_i \mid X_1, \ldots, X_{i-1}).

It has a similar form to the chain rule in probability theory, except that addition instead of multiplication is used.
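A quick numerical sanity check of the chain rule (a sketch with an assumed joint distribution, not from the original article) compares \Eta(Y \mid X) computed from the definition with \Eta(X,Y)-\Eta(X):

<syntaxhighlight lang="python">
# Sketch: verify H(Y|X) = H(X,Y) - H(X) on an assumed joint distribution,
# given as (x, y, probability) triples.
from collections import defaultdict
from math import log2

joint = [(0, 0, 0.45), (0, 1, 0.05), (1, 0, 0.05), (1, 1, 0.45)]

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

p_x = defaultdict(float)
for x, y, p in joint:
    p_x[x] += p                                    # marginal p(x)

h_xy = entropy(p for _, _, p in joint)             # joint entropy H(X,Y)
h_x = entropy(p_x.values())                        # marginal entropy H(X)
h_y_given_x = -sum(p * log2(p / p_x[x]) for x, y, p in joint if p > 0)

print(h_y_given_x, h_xy - h_x)                     # both ~0.469 bits
</syntaxhighlight>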


Bayes' rule

Bayes' rule for conditional entropy states

:\Eta(Y \mid X) \,=\, \Eta(X \mid Y) - \Eta(X) + \Eta(Y).

''Proof.'' \Eta(Y \mid X) = \Eta(X,Y) - \Eta(X) and \Eta(X \mid Y) = \Eta(Y,X) - \Eta(Y). Symmetry entails \Eta(X,Y) = \Eta(Y,X). Subtracting the two equations implies Bayes' rule.

If Y is conditionally independent of Z given X we have:

:\Eta(Y \mid X,Z) \,=\, \Eta(Y \mid X).
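For illustration, the following sketch (with an assumed joint distribution) checks Bayes' rule for entropy numerically, computing both conditional entropies directly from the joint distribution rather than via the chain rule.

<syntaxhighlight lang="python">
# Sketch: numerical check of H(Y|X) = H(X|Y) - H(X) + H(Y) on an assumed p(x, y).
from math import log2

p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}    # assumed joint
p_x = {0: 0.5, 1: 0.5}                                      # marginals of p
p_y = {0: 0.6, 1: 0.4}

def H(probs):
    return -sum(q * log2(q) for q in probs if q > 0)

h_y_given_x = -sum(q * log2(q / p_x[x]) for (x, y), q in p.items() if q > 0)
h_x_given_y = -sum(q * log2(q / p_y[y]) for (x, y), q in p.items() if q > 0)

# Bayes' rule for conditional entropy:
assert abs(h_y_given_x - (h_x_given_y - H(p_x.values()) + H(p_y.values()))) < 1e-12
</syntaxhighlight>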


Other properties

For any X and Y:

:\begin{align}
\Eta(Y \mid X) &\le \Eta(Y) \\
\Eta(X,Y) &= \Eta(X \mid Y) + \Eta(Y \mid X) + \operatorname{I}(X;Y) \\
\Eta(X,Y) &= \Eta(X) + \Eta(Y) - \operatorname{I}(X;Y) \\
\operatorname{I}(X;Y) &\le \Eta(X)
\end{align}

where \operatorname{I}(X;Y) is the mutual information between X and Y.

For independent X and Y:

:\Eta(Y \mid X) = \Eta(Y) and \Eta(X \mid Y) = \Eta(X)

Although the specific conditional entropy \Eta(X \mid Y=y) can be either less or greater than \Eta(X) for a given random variate y of Y, \Eta(X \mid Y) can never exceed \Eta(X).
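The last point can be made concrete with a small assumed distribution (a sketch, not from the original article): conditioning on the particular outcome Y=1 below increases the uncertainty about X, yet the average \Eta(X \mid Y) still does not exceed \Eta(X).

<syntaxhighlight lang="python">
# Sketch: H(X|Y=y) may exceed H(X) for some y, but the average H(X|Y) never does.
from math import log2

def H(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

joint = {(0, 0): 0.50, (1, 0): 0.00,     # given Y=0, X is fully determined
         (0, 1): 0.25, (1, 1): 0.25}     # given Y=1, X is uniform
p_x = {0: 0.75, 1: 0.25}
p_y = {0: 0.50, 1: 0.50}

h_x = H(p_x.values())                                          # ~0.811 bits
h_x_given_y1 = H([joint[(x, 1)] / p_y[1] for x in (0, 1)])     # 1.0 bit > H(X)
h_x_given_y = sum(p_y[y] * H([joint[(x, y)] / p_y[y] for x in (0, 1)
                              if joint[(x, y)] > 0]) for y in (0, 1))
print(h_x, h_x_given_y1, h_x_given_y)                          # 0.811..., 1.0, 0.5
</syntaxhighlight>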


Conditional differential entropy


Definition

The above definition is for discrete random variables. The continuous version of discrete conditional entropy is called ''conditional differential (or continuous) entropy''. Let X and Y be continuous random variables with a joint probability density function f(x,y). The differential conditional entropy h(X \mid Y) is defined as

:h(X \mid Y) = -\int_{\mathcal X, \mathcal Y} f(x,y)\log f(x \mid y)\,dx\,dy.


Properties

In contrast to the conditional entropy for discrete random variables, the conditional differential entropy may be negative. As in the discrete case there is a chain rule for differential entropy:

:h(Y \mid X)\,=\,h(X,Y)-h(X)

Notice however that this rule may not be true if the involved differential entropies do not exist or are infinite.

Joint differential entropy is also used in the definition of the mutual information between continuous random variables:

:\operatorname{I}(X,Y)=h(X)-h(X \mid Y)=h(Y)-h(Y \mid X)

h(X \mid Y) \le h(X) with equality if and only if X and Y are independent.
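As an illustration of these properties (a standard closed-form case with assumed parameters), for jointly Gaussian X and Y with correlation coefficient \rho one has h(X)=\tfrac{1}{2}\ln(2\pi e\sigma_X^2) and h(X \mid Y)=\tfrac{1}{2}\ln(2\pi e\sigma_X^2(1-\rho^2)) in nats. The sketch below evaluates both and shows the conditional differential entropy becoming negative while staying below h(X).

<syntaxhighlight lang="python">
# Sketch: closed-form differential entropies for jointly Gaussian X, Y (in nats),
# with assumed variance and correlation.
from math import pi, e, log

var_x, rho = 1.0, 0.99            # assumed Var(X) and correlation coefficient

h_x = 0.5 * log(2 * pi * e * var_x)
h_x_given_y = 0.5 * log(2 * pi * e * var_x * (1 - rho**2))

print(h_x)              # ~1.419 nats
print(h_x_given_y)      # ~-0.54 nats: negative, yet still below h(X)
</syntaxhighlight>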


Relation to estimator error

The conditional differential entropy yields a lower bound on the expected squared error of an estimator. For any random variable X, observation Y and estimator \widehat{X} the following holds:

:\mathbb{E}\left[\bigl(X - \widehat{X}(Y)\bigr)^2\right] \ge \frac{1}{2\pi e}e^{2h(X \mid Y)}

This is related to the uncertainty principle from quantum mechanics.
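In the jointly Gaussian case the bound is tight: the minimum-mean-square-error estimator \widehat{X}(Y)=\mathbb{E}[X \mid Y] attains it with equality. The sketch below (assumed parameters, not from the original article) evaluates both sides.

<syntaxhighlight lang="python">
# Sketch: for jointly Gaussian X, Y the MMSE equals the entropy-based lower bound.
from math import pi, e, exp, log

var_x, rho = 1.0, 0.8                           # assumed Var(X) and correlation

mmse = var_x * (1 - rho**2)                     # E[(X - E[X|Y])^2] for Gaussians
h_x_given_y = 0.5 * log(2 * pi * e * mmse)      # conditional differential entropy
bound = exp(2 * h_x_given_y) / (2 * pi * e)     # (1/(2*pi*e)) * e^{2 h(X|Y)}

print(mmse, bound)                              # both ~0.36: the bound is attained
</syntaxhighlight>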


Generalization to quantum theory

In quantum information theory, the conditional entropy is generalized to the conditional quantum entropy. The latter can take negative values, unlike its classical counterpart.


See also

* Entropy (information theory)
* Mutual information
* Conditional quantum entropy
* Variation of information
* Entropy power inequality
* Likelihood function

